
Residualized Similarity for Faithfully Explainable Authorship Verification

Zeng, Peter, Alipoormolabashi, Pegah, Mun, Jihu, Dey, Gourab, Soni, Nikita, Balasubramanian, Niranjan, Rambow, Owen, Schwartz, H.

arXiv.org Artificial Intelligence

Responsible use of Authorship Verification (AV) systems requires not only high accuracy but also interpretable solutions. More importantly, for systems to be used to make decisions with real-world consequences, the model's prediction must be explainable using interpretable features that can be traced to the original texts. Neural methods achieve high accuracies, but their representations lack direct interpretability. Furthermore, LLM predictions cannot be explained faithfully: even if an explanation is given for a prediction, it does not represent the reasoning process behind the model's prediction. In this paper, we introduce Residualized Similarity (RS), a novel method that supplements systems using interpretable features with a neural network to improve their performance while maintaining interpretability. Authorship verification is fundamentally a similarity task, where the goal is to measure how alike two documents are. The key idea is to use the neural network to predict a similarity residual, i.e., the error in the similarity predicted by the interpretable system. Our evaluation across four datasets shows that we can not only match the performance of state-of-the-art authorship verification models but also show how, and to what degree, the final prediction is faithful and interpretable.
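The core idea (final score = interpretable similarity + neural residual correction) can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the feature set, the least-squares base scorer, and the random-feature ridge regressor standing in for the neural network are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical interpretable pair features (e.g. differences in function-word
# rates between two documents) and gold similarity scores in [0, 1].
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -0.5, 0.3, 0.0, 0.2])
true_sim = 1 / (1 + np.exp(-(X @ w_true + 0.4 * np.tanh(X[:, 0] * X[:, 1]))))

# Step 1: interpretable similarity model (here: plain least squares).
w, *_ = np.linalg.lstsq(X, true_sim, rcond=None)
interp_sim = X @ w

# Step 2: predict the similarity residual, i.e. the error of the interpretable
# score. A ridge regression on random nonlinear features stands in for the
# paper's neural network.
residual = true_sim - interp_sim
H = np.tanh(X @ rng.normal(size=(5, 64)))
beta = np.linalg.solve(H.T @ H + 1e-2 * np.eye(64), H.T @ residual)

# Step 3: final prediction = interpretable similarity + predicted residual.
# The interpretable part stays traceable to its features; the residual term
# quantifies how much the neural correction contributed.
final_sim = interp_sim + H @ beta

mse_interp = np.mean((true_sim - interp_sim) ** 2)
mse_final = np.mean((true_sim - final_sim) ** 2)
assert mse_final <= mse_interp  # the residual correction cannot worsen the fit
```

Because the residual is modeled additively, the degree to which a prediction is "faithful" to the interpretable system can be read off directly as the magnitude of the correction term.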


Estimating the Influence of Sequentially Correlated Literary Properties in Textual Classification: A Data-Centric Hypothesis-Testing Approach

Yoffe, Gideon, Dershowitz, Nachum, Vishne, Ariel, Sober, Barak

arXiv.org Artificial Intelligence

Stylometry aims to distinguish authors by analyzing literary traits assumed to reflect semi-conscious choices distinct from elements like genre or theme. However, these components often overlap, complicating text classification based solely on feature distributions. While some literary properties, such as thematic content, are likely to manifest as correlations between adjacent text units, others, like authorial style, may be independent thereof. We introduce a hypothesis-testing approach to evaluate the influence of sequentially correlated literary properties on text classification, aiming to determine when these correlations drive classification. Using a multivariate binary distribution, our method models sequential correlations between text units as a stochastic process, assessing the likelihood of clustering across varying adjacency scales. This enables us to examine whether classification is dominated by sequentially correlated properties or remains independent. In experiments on a diverse English prose corpus, our analysis integrates traditional and neural embeddings within supervised and unsupervised frameworks. Results demonstrate that our approach effectively identifies when textual classification is not primarily influenced by sequentially correlated literary properties, particularly in cases where texts differ in authorial style or genre rather than by a single author within a similar genre.
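The permutation-style reasoning behind such a test can be illustrated with a deliberately simplified sketch. The adjacency-agreement statistic, the toy label sequence, and the shuffle-based null below are assumptions for illustration only; the paper's actual method models sequential correlations with a multivariate binary distribution across varying adjacency scales.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical binary class assignments for consecutive text units of a
# document, e.g. the per-unit output of a stylometric classifier. The runs
# are deliberately clustered.
labels = np.array([0] * 10 + [1] * 10 + [0] * 10 + [1] * 10)

def adjacent_agreement(seq):
    """Fraction of adjacent unit pairs assigned to the same class."""
    return np.mean(seq[:-1] == seq[1:])

# Under the null hypothesis that unit order carries no information (no
# sequentially correlated property drives the classification), shuffling
# the units should leave the adjacency statistic unchanged on average.
observed = adjacent_agreement(labels)
null = np.array([adjacent_agreement(rng.permutation(labels))
                 for _ in range(2000)])
p_value = np.mean(null >= observed)

assert observed > null.mean()  # clustered runs exceed the shuffled baseline
assert p_value < 0.05          # order matters: sequential correlation present
```

When the observed statistic is indistinguishable from the shuffled null, classification is plausibly driven by order-independent properties such as authorial style; a significant excess of adjacent agreement points instead to sequentially correlated properties like theme.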


Understanding Literary Texts by LLMs: A Case Study of Ancient Chinese Poetry

Zhao, Cheng, Wang, Bin, Wang, Zhen

arXiv.org Artificial Intelligence

The birth and rapid development of large language models (LLMs) have caused quite a stir in the field of literature. Once considered unattainable, AI's role in literary creation is increasingly becoming a reality. In genres such as poetry, jokes, and short stories, numerous AI tools have emerged, offering refreshing new perspectives. However, it is difficult to improve the quality of these works further. This is primarily because understanding and appreciating a good literary work requires considerable background, such as knowledge of literary theory, aesthetic sensibility, and interdisciplinary knowledge. Therefore, authoritative data in this area is quite lacking. Additionally, evaluating literary works is often complex and hard to fully quantify, which directly hinders the further development of AI creation. To address this issue, this paper attempts to explore the mysteries of literary texts from the perspective of LLMs, using ancient Chinese poetry as an example for experimentation. First, we collected a variety of ancient poems from different sources and had experts annotate a small portion of them. Then, we designed a range of comprehension metrics based on LLMs to evaluate all these poems. Finally, we analyzed the correlations and differences between various poem collections to identify literary patterns. Through our experiments, we observed a series of enlightening phenomena that provide technical support for the future development of high-level literary creation based on LLMs.


Researchers Adapt AI With Aim to Identify Anonymous Authors

#artificialintelligence

With disinformation on social media a significant problem, the ability to identify authors of malicious articles and the originators of disinformation campaigns could help reduce the threat from such information attacks. At the Black Hat Asia 2020 conference this week, three researchers from Baidu Security, the cybersecurity division of the Chinese technology giant Baidu, presented their approach to identifying authors based on machine learning techniques, such as neural networks. The researchers used 130,000 articles by more than 3,600 authors scraped from eight websites to train a neural network that could identify an author from a group of five possible writers 93% of the time and identify an author from a group of 2,000 possible writers 27% of the time. While the results are far from perfect, they do show that identifying the person behind a piece of writing is possible, said Li Yiping, a researcher at Baidu Security, during his presentation on his team's work. "Most fake news is posted anonymously and lacks valid information to identify the author," he said.


Pragmatic Analysis of Crowd-Based Knowledge Production Systems with iCAT Analytics: Visualizing Changes to the ICD-11 Ontology

Pöschko, Jan (Graz University of Technology) | Strohmaier, Markus (Graz University of Technology) | Tudorache, Tania (Stanford University) | Noy, Natalya F. (Stanford University) | Musen, Mark A. (Stanford University)

AAAI Conferences

While in the past taxonomic and ontological knowledge was traditionally produced by small groups of co-located experts, today the production of such knowledge has a radically different shape and form. For example, potentially thousands of health professionals, scientists, and ontology experts will collaboratively construct, evaluate, and maintain the most recent version of the International Classification of Diseases (ICD-11), a large ontology of diseases and causes of death managed by the World Health Organization. In this work, we present a novel web-based tool, iCAT Analytics, that enables systematic investigation of crowd-based processes in knowledge-production systems. To enable such investigation, the tool supports interactive exploration of pragmatic aspects of ontology engineering, such as how a given ontology evolved and the nature of the changes, discussions, and interactions that took place during its production process. While iCAT Analytics was motivated by ICD-11, it could potentially be applied to any crowd-based ontology-engineering project. We give an introduction to the features of iCAT Analytics and present some insights specifically for ICD-11.